Learning Grasp Affordances Through Human Demonstration
Authors
Abstract
When presented with an object to be manipulated, a robot must identify the available forms of interaction. How might an agent acquire this mapping from object representation to action? In this paper, we describe an approach that learns a mapping from objects to grasps from human demonstration. For a given object, the teacher demonstrates a set of feasible grasps. We cluster these grasps in terms of the position and orientation of the hand relative to the object. Individual clusters in this pose space are represented using probability density functions, and thus correspond to variations around canonical grasp approaches. Multiple clusters are captured through a mixture distribution-based representation. Experimental results demonstrate the feasibility of extracting a compact set of canonical grasps from the human demonstration. Each of these canonical grasps can then be used to parameterize a reach controller that brings the robot hand into a specific spatial relationship with the object.
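The clustering-and-mixture idea in the abstract can be illustrated with a small sketch. This is not the paper's implementation: the pose parameterization (3-D position plus roll/pitch/yaw), the synthetic demonstrations, and the simple k-means clustering step are all assumptions made for illustration. Each resulting cluster is summarized as one Gaussian component (mean pose, covariance, weight) of a mixture over pose space.

```python
# Illustrative sketch only -- not the paper's implementation.
# Assumed setup: each demonstrated grasp is a 6-D hand pose relative to
# the object (x, y, z, roll, pitch, yaw); demonstrations are synthetic.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic demonstrations around two canonical approaches
# (e.g. a top-down grasp and a side grasp of the same object).
top = rng.normal([0.00, 0.0, 0.10, 0.0, 3.14, 0.0], 0.01, size=(30, 6))
side = rng.normal([0.08, 0.0, 0.03, 0.0, 1.57, 0.0], 0.01, size=(30, 6))
demos = np.vstack([top, side])

def two_means(X, iters=50):
    """Minimal k-means (k=2) over pose vectors."""
    c0 = X[0]                                  # seed: first demonstration
    c1 = X[np.argmax(((X - c0) ** 2).sum(1))]  # seed: farthest demonstration
    centers = np.stack([c0, c1])
    for _ in range(iters):
        # Assign each pose to its nearest center, then recompute centers.
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.stack([X[labels == i].mean(0) for i in range(2)])
    return centers, labels

centers, labels = two_means(demos)

# Each cluster -> one mixture component: (mean pose, covariance, weight).
# The mean is the canonical grasp; the covariance captures the variation
# tolerated around that canonical approach.
mixture = [
    (demos[labels == i].mean(0), np.cov(demos[labels == i].T), (labels == i).mean())
    for i in range(2)
]
```

In the spirit of the abstract, each component's mean pose could then parameterize a reach controller, while its covariance bounds how far an executed approach may deviate from the canonical one.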
Similar resources
Learning Continuous Grasp Affordances by Sensorimotor Exploration
We develop means of learning and representing object grasp affordances probabilistically. By grasp affordance, we refer to an entity that is able to assess whether a given relative object-gripper configuration will yield a stable grasp. These affordances are represented with grasp densities, continuous probability density functions defined on the space of 3D positions and orientations. Grasp de...
Parental scaffolding as a bootstrapping mechanism for learning grasp affordances and imitation skills
Parental scaffolding is an important mechanism utilized by infants during their development. Infants, for example, pay stronger attention to the features of objects highlighted by parents and learn the way of manipulating an object while being supported by parents. Parents are known to make modifications in infant-directed actions, i.e. use “motionese”. Motionese is characterized by higher rang...
Learning Objects and Grasp Affordances through Autonomous Exploration
We describe a system for autonomous learning of visual object representations and their grasp affordances on a robot-vision system. It segments objects by grasping and moving 3D scene features, and creates probabilistic visual representations for object detection, recognition and pose estimation, which are then augmented by continuous characterizations of grasp affordances generated through bia...
Visual object-action recognition: Inferring object affordances from human demonstration
This paper investigates object categorization according to function, i.e., learning the affordances of objects from human demonstration. Object affordances (functionality) are inferred from observations of humans using the objects in different types of actions. The intended application is learning from demonstration, in which a robot learns to employ objects in household tasks, from observing a...
Relational Affordance Learning for Task-Dependent Robot Grasping
Robot grasping depends on the specific manipulation scenario: the object, its properties, task and grasp constraints. Object-task affordances facilitate semantic reasoning about pre-grasp configurations with respect to the intended tasks, favouring good grasps. We employ probabilistic rule learning to recover such object-task affordances for task-dependent grasping from realistic video data.
Publication date: 2008